
Conversation

@ProgramadorArtificial (Contributor) commented Jan 13, 2024

  • Fix sweep to keep the best model. It was storing a reference rather than a copy of the object, so at the end of training the last model trained ended up in the "best_model" variable (see the sketch below);
  • Also save the best_score of the first model that runs.
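A minimal sketch of the reference-vs-copy issue and the fix; the loop shape and the `train`/`evaluate` helpers are hypothetical stand-ins for illustration, not the actual `model_sweep` internals:

```python
import copy

best_model, best_score = None, None
for config in model_configs:         # hypothetical list of configs to sweep
    tabular_model = train(config)    # hypothetical training step
    score = evaluate(tabular_model)  # hypothetical validation metric
    if best_score is None or score > best_score:
        best_score = score  # now also recorded for the first model that runs
        # Buggy: best_model = tabular_model
        # That stores only a reference; if the sweep mutates the same object,
        # best_model ends up holding the last model trained.
        best_model = copy.deepcopy(tabular_model)  # fix: snapshot the object
```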

📚 Documentation preview 📚: https://pytorch-tabular--371.org.readthedocs.build/en/371/

@manujosephv (Collaborator) commented Jan 14, 2024

@ProgramadorArtificial Thanks for the fix. It makes sense. Merging it.

Edit: On second thought, copying the TabularModel may not be the best way, because TabularModel contains the TabularDatamodule, which in turn may contain (depending on the caching option) the entire training data. Maybe we can keep tabular_model.model in best_model and do the following with the latest tabular_model: set tabular_model.datamodule = None before deep copying it, and at the end of the sweep, set best_model.datamodule = tabular_model.datamodule?
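A short sketch of what this detach-before-copy suggestion could look like; the surrounding loop shape is an assumption for illustration, not the merged implementation:

```python
import copy

# Inside the sweep loop, when a new best model is found (assumed loop shape):
datamodule = tabular_model.datamodule
tabular_model.datamodule = None            # detach the potentially huge datamodule
best_model = copy.deepcopy(tabular_model)  # deep copy now excludes cached data
tabular_model.datamodule = datamodule      # restore it on the live model

# After the sweep, reattach a datamodule so best_model is usable for inference:
best_model.datamodule = tabular_model.datamodule
```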

@manujosephv manujosephv enabled auto-merge (squash) January 14, 2024 01:08
@manujosephv manujosephv disabled auto-merge January 14, 2024 01:11
@manujosephv (Collaborator) left a comment

As I mentioned in the comments: set tabular_model.datamodule = None before the deep copy.

@manujosephv (Collaborator) commented

Awesome! Looks good. Thanks for this PR

@manujosephv manujosephv enabled auto-merge (squash) January 14, 2024 12:48
auto-merge was automatically disabled January 14, 2024 13:01

Head branch was pushed to by a user without write access

@manujosephv manujosephv merged commit beb29d8 into pytorch-tabular:main Jan 14, 2024